
    A finite element analysis of bending stresses induced in external and internal involute spur gears

    This paper describes the use of the finite element method for predicting the fillet stress distribution experienced by loaded spur gears. The location of the finite element model boundary and the element mesh density are investigated. Fillet stresses predicted by the finite element model are compared with the results of photoelastic experiments. Both external and internal spur gear tooth forms are considered.

    An analysis strategy for large fault trees

    In recent years considerable progress has been made on improving the efficiency and accuracy of the fault tree methodology. The majority of fault trees produced to model industrial systems can now be analysed very quickly on personal computers. However there can still be problems with very large fault tree structures such as those developed to model nuclear and aerospace systems. If the fault tree consists of a large number of basic events and gates, and many of the events are repeated, possibly several times within the structure, then the processing of the full problem may not be possible. In such circumstances the problem has to be reduced to a manageable size by discarding the less significant failure modes in the qualitative evaluation to produce only the most relevant minimal cut sets, with approximations used to obtain the top event probability or frequency. The method proposed uses a combination of analysis options, each of which reduces the complexity of the problem. A factorisation technique is first applied which is designed to reduce the ‘noise’ from the tree structure. Wherever possible, events which always appear together in the tree are combined to create more complex, higher level events. A solution of the now reduced problem can always be expanded back out in terms of the original events. The second stage is to identify independent sections of the fault tree which can be analysed separately. Finally the Binary Decision Diagram (BDD) technique is used to perform the quantification. Careful selection of the ordering applied to the basic events (variables) will again aid the efficiency of the process.
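
    As a rough illustration of the factorisation stage, the sketch below groups basic events that always appear together under the same OR gates and replaces each group with a single higher-level complex event. It is a simplification of the idea rather than the paper's algorithm; the tree encoding, the gate and event names and the example tree are all hypothetical.

        from collections import defaultdict

        def factorise(gates):
            """gates: {gate: (op, [children])}; merge basic events that always
            appear together under the same OR gates into one complex event."""
            gate_names = set(gates)
            parents = defaultdict(set)              # basic event -> {(gate, op)}
            for g, (op, children) in gates.items():
                for c in children:
                    if c not in gate_names:         # c is a basic event
                        parents[c].add((g, op))

            groups = defaultdict(list)              # events sharing identical OR-gate parents
            for event, ps in parents.items():
                if all(op == "OR" for _, op in ps):
                    groups[frozenset(ps)].append(event)

            mapping, counter = {}, 0
            for ps, events in groups.items():
                if len(events) < 2:
                    continue
                counter += 1
                complex_event = f"C{counter}"       # C1 means 'any of the grouped events'
                mapping[complex_event] = events
                for g, _ in ps:                     # swap the group for the complex event
                    op, children = gates[g]
                    children = [c for c in children if c not in events]
                    gates[g] = (op, children + [complex_event])
            return gates, mapping

        # hypothetical example: E1 and E2 always appear together under OR gates G1 and G2
        tree = {"TOP": ("AND", ["G1", "G2"]),
                "G1":  ("OR",  ["E1", "E2", "E3"]),
                "G2":  ("OR",  ["E1", "E2", "E4"])}
        print(factorise(tree))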

    Identifying the major contributions to risk in phased missions

    Many systems operate phased missions. The mission consists of a number of consecutive phases where the functional requirement of the system changes during each phase. A successful mission is the completion of each of the consecutive phases. For non-repairable systems, efficient analysis methods have recently been developed to predict the mission unreliability. In the event that the predicted performance falls below the required level, modifications are made to improve the design. In conventional system failure analysis, importance measures, which identify the contribution each component makes to the failure, can be used to identify the weaknesses. Importance measures relevant for phased mission applications are developed in this paper.
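
    For background, the sketch below computes the conventional single-phase Birnbaum measure of importance, the change in system failure probability when a component is assumed failed versus assumed working; the measures developed in the paper extend this kind of quantity to individual phases of a mission. The three-component system and its failure probabilities are hypothetical.

        def system_unreliability(q):
            # hypothetical system: fails if component 1 fails, or if
            # components 2 and 3 both fail
            q1, q2, q3 = q
            return 1 - (1 - q1) * (1 - q2 * q3)

        def birnbaum(i, q):
            hi = list(q); hi[i] = 1.0               # component i assumed failed
            lo = list(q); lo[i] = 0.0               # component i assumed working
            return system_unreliability(hi) - system_unreliability(lo)

        q = [0.01, 0.05, 0.05]                      # component failure probabilities
        for i in range(3):
            print(f"component {i + 1}: I_B = {birnbaum(i, q):.4f}")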

    Component contributions to the failure of systems undergoing phased missions

    The way that many systems are utilised can be expressed in terms of missions which are split into a sequence of contiguous phases. Mission success is only achieved if each of the phases is successful; each phase is required to achieve a different objective and may use different elements of the system. The reliability analysis of a phased mission system will produce the probability of failure during each of the phases together with the overall mission failure likelihood. In the event that the system performance does not meet the acceptance requirement, weaknesses in the design are identified and improvements made to rectify the deficiencies. In conventional system assessments, importance measures can be predicted which provide a numerical indicator of the significance of each component in the system failure. Through the development of appropriate importance measures this paper provides ways of identifying the contribution made by each component failure to each phase failure and the overall mission failure. In addition, a means is given to update the system performance prediction and the importance measures as phases of the mission are successfully completed.
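
    A minimal sketch of the updating idea, under the simple assumption that Q[j] is the unconditional (and mutually exclusive) probability that the mission first fails in phase j: once the first k phases have been completed successfully, the remaining mission failure probability is the conditional probability of failing in a later phase. The figures are illustrative only.

        def remaining_mission_unreliability(Q, k):
            """Mission failure probability given that phases 1..k have succeeded."""
            survived = 1.0 - sum(Q[:k])             # probability of reaching phase k + 1
            return sum(Q[k:]) / survived

        Q = [0.010, 0.025, 0.005]                   # phase failure probabilities
        print(remaining_mission_unreliability(Q, 0))    # before the mission: 0.040
        print(remaining_mission_unreliability(Q, 1))    # after phase 1 succeeds: ~0.0303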

    A binary decision diagram method for phased mission analysis of non-repairable systems

    Phased mission analysis is carried out to predict the reliability of systems which undergo a series of phases, each with differing requirements for success, with the mission objective being achieved only on the successful completion of all phases. Many systems from a range of industries experience such missions. The methods used for phased mission analysis are dependent upon the repairability of the system during the phases. If the system is non-repairable, fault-tree-based methods offer an efficient solution. For repairable systems, Markov approaches can be used. This paper is concerned with the analysis of non-repairable systems. When the phased mission failure causes are represented using fault trees, it is shown that the binary decision diagram (BDD) method of analysis offers advantages in the solution process. A new way in which BDD models can be efficiently developed for phased mission analysis is proposed. The paper presents a methodology by which the phased mission models can be developed and analysed to produce the phase failure modes and the phase failure likelihoods.
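
    For a single non-repairable component with a constant failure rate, the quantities typically fed into such a model can be sketched as below: the probability that the component fails during phase j, and the probability that it has failed by the end of phase j (the union of failure in phases 1 to j). The failure rate and phase end times are illustrative only.

        import math

        def phase_failure_probs(rate, phase_ends):
            """P(component first fails during phase j), phases ending at the given times."""
            probs, prev = [], 0.0
            for t in phase_ends:
                probs.append(math.exp(-rate * prev) - math.exp(-rate * t))
                prev = t
            return probs

        rate, phase_ends = 1e-4, [10.0, 35.0, 50.0]
        p = phase_failure_probs(rate, phase_ends)
        print(p)                                    # failure within each phase
        print([sum(p[:j + 1]) for j in range(3)])   # failed by the end of phase j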

    Analysis methods for fault trees that contain secondary failures

    The fault tree methodology is appropriate when the component level failures (basic events) occur independently. One situation where the conditions of independence are not met occurs when secondary failure events appear in the fault tree structure. Guidelines for fault tree construction that have been utilized for many years encourage the inclusion of secondary failures along with primary failures and command faults in the representation of the failure logic. The resulting fault tree is an accurate representation of the logic but may produce inaccurate quantitative results for the probability and frequency of system failure if methodologies are used that rely on independence. This paper illustrates how inaccurate these quantitative results can be. Alternative approaches are developed by which fault trees of this type of structure can be analysed.
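
    A toy numerical example (not taken from the paper) of the inaccuracy in question: if two components share a common secondary cause S, the probability that both fail is far higher than an independence-based calculation suggests. The probabilities are hypothetical.

        pA, pB, pS = 0.01, 0.01, 0.005         # primary failures and the shared secondary cause

        qA = 1 - (1 - pA) * (1 - pS)           # component A fails: primary failure OR S
        qB = 1 - (1 - pB) * (1 - pS)           # component B fails: primary failure OR S

        independent = qA * qB                  # estimate of P(A and B) assuming independence
        exact = 1 - (1 - pA * pB) * (1 - pS)   # (A and B) = (both primaries fail) OR S

        print(f"assuming independence: {independent:.6f}")
        print(f"exact:                 {exact:.6f}")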

    Analysis of fault trees with secondary failures

    The Fault Tree methodology is appropriate when the component level failures (basic events) occur independently. One situation where the conditions of independence are not met occurs when secondary failure events appear in the fault tree structure. Guidelines for fault tree construction, which have been utilised for many years, encourage the inclusion of secondary failures along with primary failures and command faults in the representation of the failure logic. The resulting fault tree is an accurate representation of the logic but may produce inaccurate quantitative results for the probability and frequency of system failure if methodologies are used which rely on independence. This paper illustrates how inaccurate these quantitative results can be. Alternative approaches are developed by which fault trees of this type of structure can be analysed.

    Choosing a heuristic for the “fault tree to binary decision diagram” conversion, using neural networks

    Fault-tree analysis is commonly used for risk assessment of industrial systems. Several computer packages are available to carry out the analysis. Despite its common usage, the technique has limitations in terms of accuracy and efficiency when dealing with large fault-tree structures. The most recent approach to aid the analysis of the fault-tree diagram is the BDD (binary decision diagram). To use the BDD, the fault-tree structure needs to be converted into the BDD format. Converting the fault tree is relatively straightforward but requires that the basic events of the tree be ordered. This ordering is critical to the resulting size of the BDD, and ultimately affects the qualitative and quantitative performance and benefits of this technique. Several heuristic approaches have been developed to produce an optimal ordering permutation for a specific tree, but they do not always yield a minimal BDD structure: there is no single heuristic that guarantees a minimal BDD for any fault-tree structure. This paper looks at a selection approach using a neural network to choose, from a set of alternatives, the heuristic that will yield the smallest BDD and promote an efficient analysis. The set of possible selection choices comprises six alternative heuristics; for the test set of fault trees, the trained neural network chose the best ordering heuristic from this set 70% of the time.
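
    As an example of the kind of heuristic being selected between, the sketch below implements one simple ordering scheme: a depth-first, left-to-right traversal of the fault tree that records each basic event the first time it is encountered. The neural network described in the paper would choose among several such schemes for a given tree; the tree encoding and example here are hypothetical.

        def depth_first_ordering(gates, top):
            """Return a variable ordering from a left-to-right depth-first traversal."""
            order, seen = [], set()
            def visit(node):
                if node in gates:                   # node is a gate: visit its inputs in turn
                    for child in gates[node][1]:
                        visit(child)
                elif node not in seen:              # first encounter of a basic event
                    seen.add(node)
                    order.append(node)
            visit(top)
            return order

        tree = {"TOP": ("AND", ["G1", "G2"]),
                "G1":  ("OR",  ["E1", "E2"]),
                "G2":  ("OR",  ["E2", "E3"])}
        print(depth_first_ordering(tree, "TOP"))    # ['E1', 'E2', 'E3']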

    Fault tree conversion to binary decision diagrams

    Fault Tree Analysis is a commonly used technique to predict the causes of a specific system failure mode and to then determine the likelihood of this event. Over recent years the Binary Decision Diagram (BDD) method has been developed for the solution of the fault tree. It can be shown that this approach has advantages in terms of both accuracy and efficiency over the conventional method of analysis formulated in the 1970s. The BDD expresses the failure logic in a disjoint form which gives it an advantage from the computational viewpoint. Fault trees, however, remain the better way to represent the system failure causality. Therefore the usual way of taking advantage of the BDD structure is to construct a fault tree and then convert this to a BDD. It is on this fault tree conversion process that the paper focuses. In order to construct a BDD the variables which represent the occurrence of the basic events in the fault tree have to be placed in an ordering. Depending on the ordering selected, an efficient representation of the failure logic can be obtained; a poor ordering results in a less efficient analysis. Once the ordering is established, one approach is to utilise a set of rules developed by Rauzy which are repeatedly applied to generate the BDD. An alternative approach can be used whereby BDD constructs for each of the gate types are first formed and then joined together as specified by the gates in the fault tree. Some comments on the effectiveness of these approaches will be provided.
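
    The sketch below is a minimal, unoptimised illustration of the conversion once an ordering has been fixed: a single-node BDD is built for each basic event and these are combined gate by gate using Shannon decomposition on the earliest variable, in the spirit of the ite/apply construction rather than a faithful reproduction of Rauzy's rules. The tree encoding and example are hypothetical.

        ORDER = {"E1": 0, "E2": 1, "E3": 2}         # chosen variable ordering

        def ite(var, hi, lo):
            """BDD node: take hi if var has failed, lo otherwise; 0/1 are terminals."""
            return hi if hi == lo else (var, hi, lo)

        def apply_op(op, f, g):
            """Combine two ordered BDDs with AND/OR via Shannon decomposition."""
            if f in (0, 1) and g in (0, 1):
                return (f and g) if op == "AND" else (f or g)
            if f in (0, 1):
                f, g = g, f                         # make f the non-terminal
            fv = f[0]
            if g not in (0, 1) and ORDER[g[0]] < ORDER[fv]:
                f, g, fv = g, f, g[0]               # expand on the earlier variable
            f1, f0 = f[1], f[2]
            if g not in (0, 1) and g[0] == fv:
                g1, g0 = g[1], g[2]
            else:
                g1 = g0 = g                         # g does not depend on fv
            return ite(fv, apply_op(op, f1, g1), apply_op(op, f0, g0))

        def build(node, gates):
            if node not in gates:                   # basic event: single-node BDD
                return (node, 1, 0)
            op, children = gates[node]
            bdd = build(children[0], gates)
            for child in children[1:]:
                bdd = apply_op(op, bdd, build(child, gates))
            return bdd

        tree = {"TOP": ("AND", ["G1", "E3"]),
                "G1":  ("OR",  ["E1", "E2"])}
        print(build("TOP", tree))                   # BDD for (E1 or E2) and E3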

    To identify the smallest fault tree sections which contain dependencies.

    Since the early 1960s fault tree analysis has become the most frequently used technique to quantify the likelihood of a particular system failure mode. One of the underlying assumptions which justifies this approach is that the basic events are independent. However, many systems feature component failure events for which the assumption of independence is not valid. For example, standby dependency, maintenance dependency or sequential dependency can be encountered in engineering systems. In such situations, Markov analysis is required during the quantification process. Since the efficiency of the Markov analysis largely depends on the size of the established Markov model, it is most effective to apply the Markov method only to the smallest possible fault tree sections containing dependencies. The remainder of the system assessment can be performed by the application of conventional assessment techniques. The key to this approach is to extract from the fault tree the smallest sections which contain dependencies. This paper gives a brief introduction to the main dependency types and provides a method aimed at establishing the smallest Markov model for the dependencies contained within the fault tree.
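
    As an illustration of the kind of small Markov model that would be constructed for an extracted dependent section (the sketch is not the extraction method itself), the code below integrates the state equations for a cold-standby pair in which component B is only started once component A has failed. The failure rates and time step are illustrative only.

        # states: 0 = A running, B in standby; 1 = A failed, B running; 2 = both failed
        lam_a, lam_b = 1e-3, 2e-3                   # hypothetical failure rates (per hour)
        p = [1.0, 0.0, 0.0]                         # start with A running
        dt, t_end = 0.1, 1000.0

        t = 0.0
        while t < t_end:                            # simple Euler integration of the CTMC
            dp0 = -lam_a * p[0]
            dp1 = lam_a * p[0] - lam_b * p[1]
            dp2 = lam_b * p[1]
            p = [p[0] + dp0 * dt, p[1] + dp1 * dt, p[2] + dp2 * dt]
            t += dt

        print(f"P(both failed by t = {t_end:.0f} h) ~ {p[2]:.3f}")   # about 0.40 here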